Modern deep neural networks tend to be evaluated on static test sets. One shortcoming of this is that such networks cannot easily be evaluated for robustness to specific scene variations: for example, it is hard to study their robustness to variations of object scale, object pose, scene lighting and 3D occlusions. The main reason is that collecting real datasets with fine-grained naturalistic variations at sufficient scale can be extremely time-consuming and expensive. In this work, we present Counterfactual Simulation Testing, a counterfactual framework that studies the robustness of neural networks with respect to some of these naturalistic variations by building realistic synthetic scenes that allow us to ask counterfactual questions of the models, ultimately providing answers to questions such as "Would your classification still be correct if the object were viewed from the top?" or "Would your classification still be correct if the object were partially occluded by another object?". Our method allows for a fair comparison of the robustness of recently released, state-of-the-art Convolutional Neural Networks and Vision Transformers with respect to these naturalistic variations. We find evidence that ConvNext is more robust to pose and scale variations than Swin, that ConvNext generalizes better to our simulated domain, and that Swin handles partial occlusion better than ConvNext. We also find that robustness for all networks improves with network scale and with data scale and variety. We release the Naturalistic Variation Object Dataset (NVD), a large simulated dataset of 272k images of everyday objects with naturalistic variations such as object pose, scale, viewpoint, lighting and occlusions. Project page: https://counterfactualsimulation.github.io
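As a concrete illustration of this kind of counterfactual evaluation loop, the sketch below measures how often a classifier's correct prediction survives a controlled scene variation (pose, scale, occlusion, ...). The function name, data layout and metric are illustrative assumptions, not the paper's exact protocol.

```python
# A minimal sketch of counterfactual robustness evaluation, assuming a set of
# rendered scene variants per object. Names and layout are illustrative.
import torch

@torch.no_grad()
def counterfactual_accuracy(model, base_images, variant_images, labels):
    """Fraction of variants on which the model stays correct, counted only
    for objects it classifies correctly in the base (reference) scene."""
    model.eval()
    base_pred = model(base_images).argmax(dim=1)          # (N,)
    correct_base = base_pred.eq(labels)                   # (N,) bool
    # variant_images: (N, V, C, H, W) -> flatten variants into the batch dim.
    n, v = variant_images.shape[:2]
    var_pred = model(variant_images.flatten(0, 1)).argmax(dim=1).view(n, v)
    kept = var_pred.eq(labels.unsqueeze(1)) & correct_base.unsqueeze(1)
    return kept.sum().item() / max(correct_base.sum().item() * v, 1)
```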
While transformers have greatly boosted performance in semantic segmentation, domain adaptive transformers are not yet well explored. We identify that the domain gap can cause discrepancies in self-attention. Due to this gap, the transformer attends to spurious regions or pixels, which deteriorates accuracy on the target domain. We propose to perform adaptation on attention maps with cross-domain attention layers that share features between the source and the target domains. Specifically, we impose consistency between predictions from cross-domain attention and self-attention modules to encourage similar distributions in the attention and output of the model across domains, i.e., attention-level and output-level alignment. We also enforce consistency in attention maps between different augmented views to further strengthen the attention-based alignment. Combining these two components, our method mitigates the discrepancy in attention maps across domains and further boosts the performance of the transformer under unsupervised domain adaptation settings. Our model outperforms the existing state-of-the-art baseline model on three widely used benchmarks, including GTAV-to-Cityscapes by 1.3 percentage points (pp), Synthia-to-Cityscapes by 0.6 pp, and Cityscapes-to-ACDC by 1.1 pp, on average. Additionally, we verify the effectiveness and generalizability of our method through extensive experiments. Our code will be publicly available.
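To make the attention-level alignment concrete, here is a minimal sketch of a consistency loss between a self-attention map and a cross-domain attention map whose keys come from source-domain features. The tensor layout, the KL-divergence choice and the temperature are assumptions for illustration, not the paper's exact formulation.

```python
# A minimal sketch of attention-level consistency, assuming access to queries
# from the target image and keys from both the target (self) and a source
# image (cross-domain, via shared features). Names are illustrative.
import torch
import torch.nn.functional as F

def attention_consistency_loss(q, k_self, k_cross, temperature=1.0):
    """KL divergence between self-attention and cross-domain attention maps.
    q:       (B, N, D) queries from the target-domain tokens
    k_self:  (B, N, D) keys from the same (target) tokens
    k_cross: (B, N, D) keys from source-domain tokens (shared features)"""
    scale = q.shape[-1] ** -0.5
    attn_self = F.softmax(q @ k_self.transpose(-2, -1) * scale / temperature, dim=-1)
    attn_cross = F.log_softmax(q @ k_cross.transpose(-2, -1) * scale / temperature, dim=-1)
    # Encourage the cross-domain attention to match the self-attention distribution.
    return F.kl_div(attn_cross, attn_self, reduction="batchmean")
```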
Prior work has shown that Visual Recognition datasets frequently underrepresent bias groups $B$ (\eg Female) within class labels $Y$ (\eg Programmers). This dataset bias can lead to models that learn spurious correlations between class labels and bias groups such as age, gender, or race. Most recent methods that address this problem require significant architectural changes or additional loss functions that need more hyper-parameter tuning. Alternatively, data sampling baselines from the class imbalance literature (\eg Undersampling, Upweighting), which can often be implemented in a single line of code and typically have no hyperparameters, offer a cheaper and more efficient solution. However, these methods suffer from significant shortcomings. For example, Undersampling drops a significant part of the input distribution while Oversampling repeats samples, causing overfitting. To address these shortcomings, we introduce a new class-conditioned sampling method: Bias Mimicking (BM). The method is based on the observation that if a class $c$'s bias distribution, \ie $P_D(B|Y=c)$, is mimicked across every $c^{\prime}\neq c$, then $Y$ and $B$ are statistically independent. Using this notion, BM, through a novel training procedure, ensures that the model is exposed to the entire distribution without repeating samples. Consequently, Bias Mimicking improves the underrepresented groups' average accuracy of sampling methods by 3\% over four benchmarks while maintaining, and sometimes improving, performance over non-sampling methods. Code can be found at https://github.com/mqraitem/Bias-Mimicking
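A minimal sketch of the core mimicking step is shown below: for a chosen class $c$, every other class is subsampled so that its bias-group proportions match $P_D(B|Y=c)$. The index-selection code is a simplified illustration with hypothetical helper names, not the paper's full training procedure.

```python
# A simplified sketch of bias mimicking, assuming integer class labels y and
# bias-group labels b. Samples of the target class are all kept; every other
# class is subsampled so its bias distribution matches that of the target class.
import numpy as np

def mimic_bias_indices(y, b, target_class, seed=0):
    rng = np.random.default_rng(seed)
    y, b = np.asarray(y), np.asarray(b)
    groups = np.unique(b)
    tgt_mask = y == target_class
    # Bias-group proportions P(B | Y = target_class).
    p = np.array([np.mean(b[tgt_mask] == g) for g in groups])
    keep = [np.flatnonzero(tgt_mask)]
    for c in np.unique(y):
        if c == target_class:
            continue
        cls_idx = np.flatnonzero(y == c)
        counts = np.array([np.sum(b[cls_idx] == g) for g in groups])
        # Largest subsample size for which the target proportions are feasible.
        n_total = int(np.min(counts[p > 0] / p[p > 0]))
        for g, pg in zip(groups, p):
            g_idx = cls_idx[b[cls_idx] == g]
            n_keep = min(len(g_idx), int(n_total * pg))
            keep.append(rng.choice(g_idx, size=n_keep, replace=False))
    return np.concatenate(keep)
```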
Foundation Models (FMs) have demonstrated unprecedented capabilities, including zero-shot learning, high-fidelity data synthesis, and out-of-domain generalization. However, as we show in this paper, FMs still have poor out-of-the-box performance on expert tasks (e.g., retrieval of car manual technical illustrations from language queries) where the data is unseen or belongs to a long-tail part of the data distribution of the large datasets used for FM pre-training. This underlines the necessity of explicitly evaluating and fine-tuning FMs on such expert tasks, arguably the most important ones in practical real-world applications. In this paper, we propose FETA, a benchmark built around the task of teaching FMs to understand technical documentation by learning to match its graphical illustrations to the corresponding language descriptions. Our FETA benchmark focuses on text-to-image and image-to-text retrieval in publicly available car manuals and sales catalogue brochures. FETA is equipped with a procedure for fully automatic annotation extraction (code to be released upon acceptance), making FETA easy to extend to more document types and application domains in the future. Our automatic annotations lead to an automated performance metric shown to be consistent with metrics computed on human-curated annotations (also released). We provide multiple baselines and an analysis of popular FMs on FETA, leading to several interesting findings that we believe to be very valuable to the FM community, paving the way toward real-world applications of FMs to practical expert tasks that are currently "overlooked" by standard benchmarks focusing on common objects.
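For reference, text-to-image and image-to-text retrieval on benchmarks of this kind is typically scored with recall@k over a similarity matrix; the sketch below is one such implementation under the simplifying assumption of a single matching pair per item, and may not match FETA's exact evaluation protocol.

```python
# A minimal recall@k sketch for bidirectional image-text retrieval, assuming
# the ground-truth matches lie on the diagonal of the similarity matrix.
import torch

def recall_at_k(sim, k=5):
    """sim: (N, N) image-to-text similarity matrix.
    Returns (image->text, text->image) recall@k."""
    n = sim.shape[0]
    gt = torch.arange(n, device=sim.device)
    i2t = (sim.topk(k, dim=1).indices == gt[:, None]).any(dim=1).float().mean()
    t2i = (sim.t().topk(k, dim=1).indices == gt[:, None]).any(dim=1).float().mean()
    return i2t.item(), t2i.item()
```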
Recent self-supervised methods use large-scale image-text datasets to learn powerful representations that transfer to many tasks without fine-tuning. These methods often assume a one-to-one correspondence between an image and its (short) caption. However, many tasks require reasoning about multiple images and long text narratives, such as describing a news article with a visual summary. We therefore explore a novel setting in which the goal is to learn a self-supervised visual-language representation that is robust to varying text length and number of images. In addition, unlike prior work that assumed captions, we assume that images contain only loose, illustrative correspondence with the text. To explore this problem, we introduce a large-scale multimodal dataset containing 31M articles, 22M images, and 1M videos. We show that state-of-the-art image-text alignment methods are not robust to longer narratives with multiple images. Finally, we introduce an intuitive baseline that outperforms these methods by 10% on zero-shot image-set retrieval on the GoodNews dataset.
Solving multi-label recognition (MLR) for images in the low-label regime is a challenging task for many real-world applications. Recent work learns an alignment between the textual and visual spaces to compensate for insufficient image labels, but loses accuracy because of the limited amount of available MLR annotations. In this work, we leverage the strong alignment of textual and visual features pretrained with millions of auxiliary image-text pairs, and propose Dual Context Optimization (DualCoOp) as a unified framework for partial-label MLR and zero-shot MLR. DualCoOp encodes positive and negative contexts with class names as part of the linguistic input (i.e., prompts). Since DualCoOp only introduces a very light learnable overhead on top of the pretrained vision-language framework, it can quickly adapt to multi-label recognition tasks with limited annotations and even unseen classes. Experiments on standard multi-label recognition benchmarks across two challenging low-label settings demonstrate the advantages of our approach over state-of-the-art methods.
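As a rough sketch of the dual-prompt idea, the module below attaches learnable positive and negative context vectors to class-name token embeddings and scores each class by comparing the image feature against both resulting text embeddings. The `text_encoder`, tensor shapes, and scoring rule are illustrative assumptions on top of a generic pretrained vision-language model, not DualCoOp's exact implementation.

```python
# A minimal sketch of dual (positive/negative) learnable prompts on top of a
# frozen vision-language model. `text_encoder` and `class_token_embeddings`
# are assumed to come from a pretrained model such as CLIP.
import torch
import torch.nn as nn

class DualPrompt(nn.Module):
    def __init__(self, ctx_len, embed_dim):
        super().__init__()
        # Learnable positive and negative context tokens shared across classes.
        self.pos_ctx = nn.Parameter(torch.randn(ctx_len, embed_dim) * 0.02)
        self.neg_ctx = nn.Parameter(torch.randn(ctx_len, embed_dim) * 0.02)

    def forward(self, class_token_embeddings, text_encoder, image_features):
        # class_token_embeddings: (C, L, D) embedded class-name tokens.
        c = class_token_embeddings.shape[0]
        pos = torch.cat([self.pos_ctx.expand(c, -1, -1), class_token_embeddings], dim=1)
        neg = torch.cat([self.neg_ctx.expand(c, -1, -1), class_token_embeddings], dim=1)
        pos_feat = text_encoder(pos)                # (C, D) positive class embeddings
        neg_feat = text_encoder(neg)                # (C, D) negative class embeddings
        pos_logit = image_features @ pos_feat.t()   # (B, C)
        neg_logit = image_features @ neg_feat.t()   # (B, C)
        # Softmax over the (positive, negative) pair gives a presence probability.
        return torch.softmax(torch.stack([pos_logit, neg_logit], dim=-1), dim=-1)[..., 0]
```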
While pose estimation is an important computer vision task, it requires expensive annotation and suffers from domain shift. In this paper, we investigate the problem of domain adaptive 2D pose estimation, which transfers knowledge learned on a synthetic source domain to a target domain without supervision. While several domain adaptive pose estimation models have been proposed recently, they are not generic but focus on either human pose or animal pose estimation, so their effectiveness is somewhat limited to specific scenarios. In this work, we propose a unified framework that generalizes well to various domain adaptive pose estimation problems. We propose to align representations using both input-level and output-level cues (pixels and pose labels, respectively), which facilitates knowledge transfer from the source domain to the unlabeled target domain. Our experiments show that our method achieves state-of-the-art performance under various domain shifts. Our method outperforms existing baselines by up to 4.5 percentage points (pp) on human pose estimation, up to 7.4 pp on hand pose estimation, and up to 4.8 pp on animal pose estimation for dogs and 3.3 pp for sheep. These results suggest that our method is able to mitigate domain shift on various tasks and even on unseen domains and objects (e.g., trained on horses and tested on dogs). Our code will be publicly available at: https://github.com/visionlearninggroup/uda_poseestimation.
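One common way to instantiate output-level alignment on the unlabeled target domain is a consistency loss between keypoint heatmaps predicted for two views of the same target image; the sketch below illustrates that generic pattern and is not necessarily the exact mechanism of this framework.

```python
# A hedged sketch of output-level alignment for unsupervised domain adaptive
# pose estimation: align the heatmaps predicted for two views (e.g., differently
# stylized or augmented) of the same unlabeled target-domain image.
import torch
import torch.nn.functional as F

def heatmap_consistency(model, target_view_a, target_view_b):
    """Both inputs are (B, C, H, W) views of the same target-domain images;
    the model returns per-keypoint heatmaps of shape (B, K, H', W')."""
    with torch.no_grad():
        pseudo = model(target_view_a)   # reference prediction, no gradient
    pred = model(target_view_b)         # prediction to be aligned
    return F.mse_loss(pred, pseudo)
```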
Deep models must learn robust and transferable representations in order to perform well on new domains. While domain transfer methods (e.g., domain adaptation, domain generalization) have been proposed to learn transferable representations across domains, they are typically applied to ResNet backbones pre-trained on ImageNet. Thus, existing works pay little attention to the effects of pre-training on domain transfer tasks. In this paper, we provide a broad study and in-depth analysis of pre-training for domain adaptation and generalization, covering network architecture, size, pre-training loss, and datasets. We observe that simply using a state-of-the-art backbone outperforms existing state-of-the-art domain adaptation baselines and sets new baselines on Office-Home and DomainNet, improving them by 10.7% and 5.5% respectively. We hope that this work can provide more insights for future domain transfer research.
Humans have a remarkable ability to reason abductively and hypothesize about what lies beyond the literal content of an image. By identifying concrete visual clues scattered throughout a scene, we almost can't help but draw probable inferences based on our everyday experience and knowledge about the world. For example, if we see a "20 mph" sign alongside a road, we might assume the street sits in a residential area (rather than on a highway), even if no houses are visible. Can machines perform similar visual reasoning? We present Sherlock, an annotated corpus of 103K images for testing machine capacity for abductive reasoning beyond literal image content. We adopt a free-viewing paradigm: participants first observe and identify salient clues within an image (e.g., objects, actions), and then, given a clue, provide a plausible inference about the scene. In total, we collected 363K (clue, inference) pairs, which form a first-of-its-kind abductive visual reasoning dataset. Using our corpus, we test three complementary axes of abductive reasoning. We evaluate the capacity of models to: i) retrieve relevant inferences from a large candidate corpus; ii) localize evidence for an inference via bounding boxes; and iii) compare plausible inferences to match human judgments on a newly collected diagnostic corpus of 19K Likert-scale judgments. While we find that fine-tuning CLIP-RN50x64 with a multi-task objective outperforms strong baselines, significant headroom remains between model performance and human agreement. Data, models, and the leaderboard are available at http://visualabduction.com/
Vision-Language Navigation (VLN), in which an agent follows language instructions in a visual environment, has been studied under the premise that the input command is fully feasible in the environment. In practice, however, a request may not be possible due to language ambiguity or environment changes. To study VLN with unknown command feasibility, we introduce a new dataset, Mobile app Tasks with Iterative Feedback (MoTIF), whose goal is to complete natural language commands in mobile apps. Mobile apps provide a scalable domain to study downstream uses of VLN methods. Moreover, mobile app commands provide instructions for interactive navigation, as they result in action sequences that change state via clicking, typing, or swiping. MoTIF is the first dataset to include feasibility annotations, containing both binary feasibility labels and fine-grained labels for why tasks are unsatisfiable. We further collect follow-up questions for ambiguous queries to enable resolution of task uncertainty. Equipped with our dataset, we propose the new problem of feasibility prediction, in which the feasibility of a command is predicted from a natural language instruction and the multimodal app environment. MoTIF provides a more realistic app dataset, as it contains many diverse environments, high-level goals, and longer action sequences. We use MoTIF to evaluate interactive VLN methods, quantify the generalization ability of current approaches to new app environments, and measure the effect of task feasibility on navigation performance.